When Does Aid Conditionality Work?

Authors

Abstract


Similar Articles

Incentives, Inequality and the Allocation of Aid When Conditionality Doesn't Work: an Optimal Nonlinear Taxation Approach

This paper analyses the impact of aid, and its optimal allocation, when conditionality is ineffective. It is assumed that the recipient government will implement its own preferences no matter what. In this setup, aid can still affect the recipient's behavior, not through conditionality but by changing its resource constraints. We analyze the problem in the tradition of models of optimal no...
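As a minimal illustration of that resource-constraint channel (my own sketch, not the paper's actual model): suppose the government maximizes its own preferences over consumption c and a donor-valued good g subject to its budget,

\[
\max_{c,\,g}\; u(c, g) \quad \text{s.t.} \quad c + g \le y + a,
\]

where y is domestic revenue and a is aid. Even with the preferences u fixed, raising a relaxes the constraint, so the chosen bundle (c, g), and in particular spending on g, can move in the donor's direction without any conditionality.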


When Does Active Learning Work?

Active Learning (AL) methods seek to improve classifier performance when labels are expensive or scarce. We consider two central questions: where does AL work, and how much does it help? To address these questions, we present a comprehensive experimental simulation study of Active Learning. We consider a variety of tasks, classifiers, and other AL factors to present a broad exploration of AL perf...
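As a concrete instance of the setting the abstract studies (a sketch only; the paper's contribution is the simulation study itself, and the dataset, model, and query strategy below are my own choices), a minimal pool-based uncertainty-sampling loop looks like this:

```python
# Minimal pool-based active learning with uncertainty (margin) sampling.
# The "oracle" is simulated by the already-known labels y.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=10, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                           # one query per round
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    margin = np.abs(proba[:, 1] - 0.5)        # small margin = uncertain
    query = pool[int(np.argmin(margin))]      # most uncertain pool point
    labeled.append(query)                     # oracle reveals its label
    pool.remove(query)
```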


When Does Quasi-random Work?

[10, 22] presented various ways of introducing quasi-random numbers or derandomization into evolution strategies, in some cases with spectacular claims that the proposed technique was better than standard mutations always and by every criterion. Here we focus on the quasi-random trick and examine to what extent this technique is efficient, through an in-depth analysis including convergenc...
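For orientation, one common form of the quasi-random trick (my reading of the general technique, not necessarily the exact derandomization of [10, 22]) replaces i.i.d. Gaussian mutations in a (1+1) evolution strategy with scrambled Sobol points pushed through the inverse normal CDF:

```python
# (1+1)-ES on a toy sphere objective, with mutations drawn from a scrambled
# Sobol sequence mapped through the inverse normal CDF instead of np.random.
import numpy as np
from scipy.stats import norm, qmc

def sphere(x):
    return float(np.sum(x ** 2))            # toy objective to minimize

dim, sigma = 10, 0.5
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
x = np.ones(dim)
fx = sphere(x)
for u in sobol.random(256):                  # low-discrepancy points in (0,1)^d
    z = norm.ppf(u)                          # quasi-random stand-in for N(0, I)
    cand = x + sigma * z
    fc = sphere(cand)
    if fc < fx:                              # elitist (1+1) selection
        x, fx = cand, fc
print(fx)                                    # well below the starting value 10.0
```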


When tabling does not work

Tabled execution has been successfully applied in various domains such as program analysis, model checking, and parsing. A recent target of tabling is the optimization of Inductive Logic Programming (ILP). Due to the iterative nature of ILP algorithms, the queries they evaluate typically show a lot of similarity. To avoid repeated execution of identical parts of queries, answers to the ...
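The core idea of tabling, stripped to its simplest form, is storing answers to subqueries so that identical parts of queries are solved only once. A toy sketch of that idea (real tabling engines such as XSB or YAP additionally handle cyclic and incomplete computations, which plain memoization does not):

```python
# Answer reuse via memoization: each subquery reachable(n) is computed once,
# then served from the table, even when it occurs in many larger queries.
from functools import lru_cache

EDGES = {"a": ("b", "c"), "b": ("d",), "c": ("d",), "d": ()}

@lru_cache(maxsize=None)
def reachable(node):
    """Set of nodes reachable from `node` in the acyclic graph EDGES."""
    out = set()
    for nxt in EDGES[node]:
        out |= {nxt} | reachable(nxt)
    return frozenset(out)

print(sorted(reachable("a")))  # ['b', 'c', 'd']; reachable('d') runs only once
```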


When Does Stochastic Gradient Algorithm Work Well?

In this paper, we consider a general stochastic optimization problem that is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function, under which this method has improved convergence rates (to a neighborhoo...
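To illustrate the fixed-step behavior the abstract alludes to (the paper's assumption and rates are not reproduced here; the problem and step size below are arbitrary choices of mine), fixed-step SGD on least squares converges quickly to a noise-dominated neighborhood of the optimum rather than to the optimum itself:

```python
# Fixed-step SGD on noisy least squares: fast initial progress, then the
# iterates hover in a neighborhood of x_star whose size scales with the
# step size and the gradient noise.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=n)    # noisy labels

x, step = np.zeros(d), 0.05                  # fixed, fairly large step size
for t in range(2000):
    i = rng.integers(n)                      # one sample per iteration
    grad = (A[i] @ x - b[i]) * A[i]          # stochastic gradient
    x -= step * grad

print(float(np.linalg.norm(x - x_star)))     # small but not zero: a "neighborhood"
```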



Journal

Journal title: Studies in Comparative International Development

Year: 2010

ISSN: 0039-3606, 1936-6167

DOI: 10.1007/s12116-010-9068-6